
    The University of Sussex-Huawei locomotion and transportation dataset for multimodal analytics with mobile devices

    Scientific advances build on reproducible research, which needs publicly available benchmark datasets. The computer vision and speech recognition communities have led the way in establishing benchmark datasets, but far fewer datasets are available in mobile computing, especially for rich locomotion and transportation analytics. This paper presents a highly versatile and precisely annotated large-scale dataset of smartphone sensor data for multimodal locomotion and transportation analytics of mobile users. The dataset comprises 7 months of measurements collected from all sensors of 4 smartphones carried at typical body locations, including the images of a body-worn camera, while 3 participants used 8 different modes of transportation in the south-east of the United Kingdom, including in London. In total, 28 context labels were annotated, including transportation mode, participant’s posture, inside/outside location, road conditions, traffic conditions, presence in tunnels, social interactions, and having meals. The total amount of collected data exceeds 950 GB of sensor data, which corresponds to 2812 hours of labelled data and 17562 km of travelled distance. We present how we set up the data collection, including the equipment used and the experimental protocol. We discuss the dataset, including the data curation process and the analysis of the annotations and of the sensor data. We discuss the challenges encountered, the lessons learned, and some of the best practices we developed to ensure high-quality data collection and annotation. We discuss the potential applications which can be developed using this large-scale dataset; in particular, we present how a machine-learning system can use this dataset to automatically recognize modes of transportation. Many other research questions related to transportation analytics, activity recognition, radio signal propagation and mobility modelling can be addressed through this dataset. The full dataset is being made available to the community, and a thorough preview is already published.
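    As an illustrative example of such a machine-learning system, the following minimal sketch trains a classifier to recognize transportation modes from windowed accelerometer data. The file name, column layout, sampling rate and label encoding are assumptions made for illustration, not the dataset's actual format.

        # Hypothetical sketch: recognizing transportation modes from
        # smartphone accelerometer windows. Assumed layout: CSV rows of
        # [ax, ay, az, label] sampled at 100 Hz (not the real SHL format).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        def extract_features(window):
            """Simple per-window statistics of the acceleration magnitude."""
            mag = np.linalg.norm(window, axis=1)
            return [mag.mean(), mag.std(), mag.min(), mag.max()]

        data = np.loadtxt("shl_accel.csv", delimiter=",")   # assumed file
        win = 500                                           # 5 s at 100 Hz
        X = [extract_features(data[i:i + win, :3])
             for i in range(0, len(data) - win, win)]
        y = [int(data[i:i + win, 3].mean().round())         # crude window label
             for i in range(0, len(data) - win, win)]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3)
        clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))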

    WLCSSLearn: learning algorithm for template matching-based gesture recognition systems

    Template matching algorithms are well suited for gesture recognition, but unlike other machine learning approaches there are no established methods to optimize their parameters. We present WLCSSLearn: an optimization approach for the WarpingLCSS algorithm based on genetic algorithms. We demonstrate that WLCSSLearn makes the optimization procedure automatic, fast, and suitable for new recognition problems even when there is no a-priori knowledge about the suitable range of parameter values. We evaluate WLCSSLearn on three different gesture datasets and demonstrate that our method increases accuracy and F1 score by up to 20% compared to previous literature
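    The abstract does not spell out the optimization loop, but a genetic algorithm over template-matching parameters could look roughly like the sketch below. The three parameters (reward, penalty, acceptance threshold) and the toy fitness function are assumptions; a real system would score each candidate parameter set by its recognition F1 on training gestures.

        # Minimal genetic-algorithm sketch in the spirit of WLCSSLearn.
        # The fitness function is a stand-in with a known optimum at
        # (8, 2, 50); plug in the actual template-matching F1 score here.
        import random

        def fitness(params):
            reward, penalty, threshold = params
            return -(reward - 8) ** 2 - (penalty - 2) ** 2 - (threshold - 50) ** 2

        def mutate(p, scale=1.0):
            return tuple(v + random.gauss(0, scale) for v in p)

        def crossover(a, b):
            return tuple(random.choice(pair) for pair in zip(a, b))

        pop = [(random.uniform(0, 20), random.uniform(0, 10),
                random.uniform(0, 100)) for _ in range(30)]

        for generation in range(50):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:10]                    # keep the fittest third
            children = [mutate(crossover(random.choice(elite),
                                         random.choice(elite)))
                        for _ in range(len(pop) - len(elite))]
            pop = elite + children

        print("best parameters found:", max(pop, key=fitness))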

    Exploring human activity annotation using a privacy preserving 3D model

    Annotating activity recognition datasets is a very time-consuming process. Using lay annotators (e.g. through crowdsourcing) has been suggested to speed this up. However, this requires preserving the privacy of users and may preclude relying on video for annotation. We investigate to what extent a 3D human model, animated from the data of inertial sensors placed on the limbs, allows for the annotation of human activities. The animated model was shown to 6 people in a suite of tests in order to assess the accuracy of the resulting labelling. We present the model and the dataset, and then 3 experiments in which we investigate the use of the 3D model for i) activity segmentation, ii) "open-ended" annotation, where users freely describe the activity they see on screen, and iii) traditional annotation, where users pick one activity from a pre-defined list. In the latter case, results show that users recognise activities with 56% accuracy when picking from 11 possible activities
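    A minimal sketch of the underlying idea, assuming each limb-worn inertial sensor delivers its orientation as a unit quaternion: rotating the bones of a stick figure with those orientations animates a model that shows motion without revealing video. The bone names, lengths and rest pose are illustrative assumptions.

        # Hedged sketch: posing a privacy-preserving stick figure from
        # per-segment orientation quaternions (w, x, y, z). Bone names
        # and lengths are made up for illustration.
        import numpy as np

        def rotate(q, v):
            """Rotate vector v by unit quaternion q."""
            w, x, y, z = q
            u = np.array([x, y, z])
            return v + 2 * np.cross(u, np.cross(u, v) + w * v)

        BONES = {"upper_arm": 0.30, "forearm": 0.25}   # lengths in metres

        def limb_positions(shoulder, quats):
            """Chain bones from the shoulder using per-segment orientations."""
            joint, points = np.asarray(shoulder, float), []
            for name, length in BONES.items():
                direction = rotate(quats[name], np.array([0.0, 0.0, -1.0]))
                joint = joint + length * direction
                points.append((name, joint.copy()))
            return points

        identity = (1.0, 0.0, 0.0, 0.0)
        print(limb_positions([0, 0, 1.5],
                             {"upper_arm": identity, "forearm": identity}))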

    Three-year review of the 2018–2020 SHL challenge on transportation and locomotion mode recognition from mobile sensors

    The Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenges aim to advance and capture the state-of-the-art in locomotion and transportation mode recognition from smartphone motion (inertial) sensors. The goal of this series of machine learning and data science challenges was to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Subway). The three challenges focused on time-independent (SHL 2018), position-independent (SHL 2019) and user-independent (SHL 2020) evaluations, respectively. Overall, we received 48 submissions (out of 93 teams who registered interest) involving 201 scientists over the three years. The survey captures the state-of-the-art through a meta-analysis of the contributions to the three challenges, including approaches, recognition performance, computational requirements, and the software tools and frameworks used. It shows that state-of-the-art methods can distinguish most modes of transportation with relative ease, although differentiating between subtly distinct activities, such as rail transport (Train and Subway) and road transport (Bus and Car), still remains challenging. We summarize insightful methods from participants that could be employed to address practical challenges of transportation mode recognition, for instance to tackle over-fitting, employ robust representations, exploit data augmentation, and apply smart post-processing techniques to improve performance. Finally, we present baseline results that compare the three challenges with a unified recognition pipeline and decision window length
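    As one example of the post-processing techniques mentioned, a simple majority-vote smoother can remove isolated misclassifications in a sequence of per-window predictions, exploiting the fact that transportation modes rarely change every few seconds. The label sequence below is made up for demonstration.

        # Illustrative post-processing sketch: majority-vote smoothing of
        # per-window predictions over a (2k+1)-window neighbourhood.
        from collections import Counter

        def majority_smooth(labels, k=2):
            """Replace each label by the majority in its neighbourhood."""
            out = []
            for i in range(len(labels)):
                lo, hi = max(0, i - k), min(len(labels), i + k + 1)
                out.append(Counter(labels[lo:hi]).most_common(1)[0][0])
            return out

        raw = ["Bus", "Bus", "Car", "Bus", "Bus", "Walk", "Walk", "Bus", "Walk"]
        print(majority_smooth(raw))   # isolated flips are smoothed out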